
Runtime Prevention

info

This page only presents a proof of concept (POC).

gVisor on EKS EC2

Prerequisites

tip

Authenticate to your AWS account via the CLI.
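
For example, a minimal sketch assuming static credentials (SSO or an assumed role works just as well):

aws configure
# Confirm the credentials resolve to the expected account
aws sts get-caller-identity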

Create EKS cluster

ssh-keygen -t rsa -b 4096 -C "Gvisor_EKS_POC" -f ~/.ssh/Gvisor_POC

eksctl create cluster --name GvisorPOC --nodes 2 --region eu-west-1 --ssh-access --ssh-public-key </Users/my_user/.ssh/Gvisor_POC.pub>
# Cluster creation takes roughly 20 minutes...

kubectl get nodes

Select the first node and apply a label. This node will be used to test gVisor.

export gVisor_node_name=$(kubectl get nodes -o jsonpath='{.items[0].metadata.name}')

kubectl label node $gVisor_node_name runtime=gvisor
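
As a quick check, the label selector should now return exactly one node:

kubectl get nodes -l runtime=gvisor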

SSH into the gVisor node.

export gVisor_node_EIP=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="ExternalIP")].address}')

ssh -i </Users/my_user/.ssh/Gvisor_POC> ec2-user@$gVisor_node_EIP

Configure containerd and the runsc runtime.

# Install gVisor
set -e
ARCH=$(uname -m)
URL=https://storage.googleapis.com/gvisor/releases/release/latest/${ARCH}
wget ${URL}/runsc ${URL}/runsc.sha512 ${URL}/containerd-shim-runsc-v1 ${URL}/containerd-shim-runsc-v1.sha512
sha512sum -c runsc.sha512 -c containerd-shim-runsc-v1.sha512
rm -f *.sha512
chmod a+rx runsc containerd-shim-runsc-v1
sudo mv runsc containerd-shim-runsc-v1 /usr/local/bin

# Configure kubelet to point to containerd socket
sudo sed -i "s/'$/ --container-runtime=remote --container-runtime-endpoint=unix:\/\/\/run\/containerd\/containerd.sock'/" /etc/systemd/system/kubelet.service.d/10-kubelet-args.conf
sudo systemctl daemon-reload
sudo systemctl restart kubelet
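
# Optional sanity check: the kubelet unit (plus drop-ins) should now carry the containerd flags
systemctl cat kubelet | grep container-runtime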

# Register the runsc runtime with containerd (a RuntimeClass will later map Pods to runsc)
cat <<EOF | sudo tee /etc/containerd/config.toml
version = 2
[plugins."io.containerd.runtime.v1.linux"]
shim_debug = true
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
runtime_type = "io.containerd.runc.v2"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runsc]
runtime_type = "io.containerd.runsc.v1"
EOF

# Restart containerd
sudo systemctl restart containerd
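
Still on the node, a quick sanity check confirms the install and the restart (nothing below assumes anything beyond the steps above):

runsc --version
sudo systemctl is-active containerd
# Expect "active"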

Back on the local machine, check that the gVisor node now reports containerd as its runtime.

kubectl get nodes -o jsonpath='{.items[0].status.nodeInfo.containerRuntimeVersion}'
# containerd://1.4.6

# The node without gVisor still reports Docker as its runtime
kubectl get nodes -o jsonpath='{.items[1].status.nodeInfo.containerRuntimeVersion}'
# docker://20.10.7
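
Alternatively, kubectl get nodes -o wide prints a CONTAINER-RUNTIME column for every node, which avoids relying on item indices:

kubectl get nodes -o wide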

Install the RuntimeClass for gVisor.

cat <<EOF | kubectl apply -f -
apiVersion: node.k8s.io/v1beta1
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc
EOF
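
Since runsc only exists on the labeled node, a gVisor Pod scheduled onto the other node would fail to start. One option (a sketch reusing the runtime=gvisor label applied earlier) is to add the scheduling field to the RuntimeClass so the scheduler only places gVisor Pods on matching nodes:

cat <<EOF | kubectl apply -f -
apiVersion: node.k8s.io/v1beta1
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc
scheduling:
  nodeSelector:
    runtime: gvisor
EOF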

Create a Pod with the gVisor RuntimeClass.

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: nginx-gvisor
spec:
  runtimeClassName: gvisor
  containers:
  - name: nginx
    image: nginx
EOF
# Is the Pod running?
kubectl get pod nginx-gvisor -o wide

# Is the Pod running correctly?
kubectl run --rm --restart=Never --stdin --tty --image=curlimages/curl curl -- curl $(kubectl get pod nginx-gvisor -o jsonpath='{.status.podIP}')

# Check the kernel the Pod sees (gVisor's Sentry reports its own emulated kernel, not the host's)
kubectl exec -it nginx-gvisor -- /bin/bash
## root@nginx-gvisor:/# uname -a
### Linux nginx-gvisor 4.4.0 #1 SMP Sun Jan 10 15:06:54 PST 2016 x86_64 GNU/Linux
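
For contrast, the same image without the gVisor RuntimeClass reports the host kernel. A short sketch, where nginx-host is a hypothetical Pod name:

kubectl run nginx-host --image=nginx
kubectl wait --for=condition=Ready pod/nginx-host
kubectl exec nginx-host -- uname -r
# Expect the EC2 host's real kernel, not gVisor's emulated 4.4.0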

Reference